
    An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices

    In this paper, we study the Learning With Errors problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving within half a day an LWE instance with dimension $n = 128$, modulus $q = n^2$, Gaussian noise $\alpha = 1/(\sqrt{n/\pi} \log^2 n)$ and binary secret, using $2^{28}$ samples, while the previous best result based on BKW claims a time complexity of $2^{74}$ with $2^{60}$ samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with $n$ samples in subexponential time $2^{(\ln 2/2 + o(1)) n / \log \log n}$. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density $o(1)$, which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010. Comment: CRYPTO 2015
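
    As a rough, hedged illustration of the modulus-switching idea that the paper's quantization step generalizes (the target modulus p, sample count m and simplified noise model below are made up for the demo), the following sketch maps LWE samples mod q down to a smaller modulus p and checks that the secret still explains the switched samples up to small noise:

```python
# A minimal sketch of plain modulus switching for LWE (toy parameters; the
# paper's fine-tuned quantization step generalizes this basic transformation).
import numpy as np

rng = np.random.default_rng(1)
n, q, p, m = 128, 128**2, 2**9, 1000
alpha = 1.0 / (np.sqrt(n / np.pi) * np.log(n) ** 2)

s = rng.integers(0, 2, n)                          # binary secret
A = rng.integers(0, q, (m, n))
e = np.rint(rng.normal(0.0, alpha * q, m)).astype(np.int64)
b = (A @ s + e) % q                                # LWE samples (A, b) mod q

A2 = np.rint(A * (p / q)).astype(np.int64) % p     # modulus-switched samples
b2 = np.rint(b * (p / q)).astype(np.int64) % p

# The residual b2 - A2 s mod p stays small: switching only adds rounding
# noise, which is tame because the secret is binary.
res = (b2 - A2 @ s) % p
res = np.where(res > p // 2, res - p, res)         # centre in (-p/2, p/2]
print("max |residual| after switching:", np.abs(res).max(), "vs p/2 =", p // 2)
```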

    Learning strikes again: The case of the DRS signature scheme

    Lattice signature schemes generally require particular care when it comes to preventing secret information from leaking through signature transcripts. For example, the Goldreich-Goldwasser-Halevi (GGH) signature scheme and the NTRUSign scheme were completely broken by the parallelepiped-learning attack of Nguyen and Regev (Eurocrypt 2006). Several heuristic countermeasures were also shown vulnerable to similar statistical attacks. At PKC 2008, Plantard, Susilo and Win proposed a new variant of GGH, informally arguing resistance to such attacks. Based on this variant, Plantard, Sipasseuth, Dumondelle and Susilo proposed a concrete signature scheme, called DRS, that was accepted into round 1 of the NIST post-quantum cryptography project. In this work, we propose yet another statistical attack and demonstrate a weakness of the DRS scheme: one can recover partial information on the secret key from sufficiently many signatures. One difficulty is that, due to the DRS reduction algorithm, the relation between the statistical leak and the secret is more intricate than in earlier attacks. We work around this difficulty by training a statistical model, using a few features that we designed according to a simple heuristic analysis. While we only recover partial information on the secret key, this information is easily exploited by lattice attacks, significantly decreasing their complexity. Concretely, we claim that, provided sufficiently many signatures are available, the secret key may be recovered using BKZ-138 for the first set of DRS parameters submitted to NIST. This puts the security level of this parameter set below 80 bits (maybe even 70 bits), compared to an original claim of 128 bits.
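
    For intuition, the sketch below illustrates the classical parallelepiped leak behind this family of attacks (an illustrative toy of the Nguyen-Regev principle, not the DRS attack itself, which relies on engineered features and a trained model): transcripts uniform in the parallelepiped of a secret basis reveal its Gram matrix through their empirical second moment.

```python
# A toy demonstration of the parallelepiped leak exploited by statistical
# transcript attacks (hypothetical setup; DRS needs hand-designed features).
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 200_000
B = rng.integers(-5, 6, (n, n)).astype(float)   # rows: stand-in secret basis vectors

U = rng.uniform(-1.0, 1.0, (m, n))              # hidden uniform coefficients
X = U @ B                                       # observed "transcripts" x = u B

G_est = 3.0 * (X.T @ X) / m                     # since E[u^T u] = I/3, this -> B^T B
print("max entry error:", np.abs(G_est - B.T @ B).max())
# The error shrinks like 1/sqrt(m); the recovered Gram matrix then seeds
# lattice attacks on the secret basis.
```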

    Out of Oddity – New Cryptanalytic Techniques Against Symmetric Primitives Optimized for Integrity Proof Systems

    The security and performance of many integrity proof systems like SNARKs, STARKs and Bulletproofs highly depend on the underlying hash function. For this reason several new proposals have recently been developed. These primitives obviously require an in-depth security evaluation, especially since their implementation constraints have led to less standard design approaches. This work compares the security levels offered by two recent families of such primitives, namely GMiMC and HadesMiMC. We exhibit low-complexity distinguishers against the GMiMC and HadesMiMC permutations for most parameters proposed in recently launched public challenges for STARK-friendly hash functions. In the more concrete setting of the sponge construction corresponding to the practical use in the ZK-STARK protocol, we present a practical collision attack on a round-reduced version of GMiMC and a preimage attack on some instances of HadesMiMC. To achieve those results, we adapt and generalize several cryptographic techniques to fields of odd characteristic.
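
    To make the design space concrete, here is a minimal MiMC-style cipher over a prime field of odd characteristic (toy parameters and round constants of our own choosing; GMiMC and HadesMiMC are wider, more elaborate designs built from the same kind of low-degree round function):

```python
# A minimal MiMC-style permutation over a prime field of odd characteristic
# (illustrative toy, not GMiMC or HadesMiMC themselves).
p = 2**64 - 59                 # prime with p = 2 (mod 3), so x -> x^3 is a bijection
ROUNDS = 41                    # roughly log_3(p) rounds, as in MiMC
CONSTS = [pow(5, r, p) for r in range(ROUNDS)]   # stand-in round constants

def encrypt(x: int, k: int) -> int:
    for c in CONSTS:
        x = pow((x + k + c) % p, 3, p)           # one round: add key/constant, cube
    return (x + k) % p

def decrypt(y: int, k: int) -> int:
    d = pow(3, -1, p - 1)                        # exponent inverting the cube map
    x = (y - k) % p
    for c in reversed(CONSTS):
        x = (pow(x, d, p) - k - c) % p
    return x

assert decrypt(encrypt(123456789, 42), 42) == 123456789
```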

    Quantum Lightning Never Strikes the Same State Twice

    Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work, we investigate quantum lightning, a formalization of "collision-free quantum money" defined by Lutomirski et al. [ICS'10], where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing the following results:
    - We demonstrate the usefulness of quantum lightning by showing several potential applications, ranging from generating random strings with a proof of entropy to completely decentralized cryptocurrency without a blockchain, where transactions are instant and local.
    - We give win-win results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning.
    - We construct quantum lightning under the assumed multi-collision resistance of random degree-2 systems of polynomials.
    - We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC'12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme.

    Exploring Trade-offs in Batch Bounded Distance Decoding

    Algorithms for solving the Bounded Distance Decoding problem (BDD) are used for estimating the security of lattice-based cryptographic primitives, since these algorithms can be employed to solve variants of the Learning with Errors problem (LWE). In certain parameter regimes where the target vector is small and/or sparse, batches of BDD instances emerge from a combinatorial approach where several components of the target vector are guessed before decoding. In this work we explore trade-offs in solving "Batch-BDD", and apply our techniques to the small-secret Learning with Errors problem. We compare our techniques to previous works which solve batches of BDD instances, such as the hybrid lattice-reduction and meet-in-the-middle attack. Our results are a mixed bag. We show that, in the "enumeration setting" and with BKZ reduction, our techniques outperform a variant of the hybrid attack which does not consider time-memory trade-offs in the guessing phase for certain Round5 (17 bits out of 466), Round5-IoT (19 bits out of 240), and NTRU LPRime (23 bits out of 385) parameter sets. On the other hand, our techniques do not outperform the hybrid attack under standard, albeit unrealistic, assumptions. Finally, as expected, our techniques do not improve on previous works in the "sieving setting" (under standard assumptions), where combinatorial attacks in general do not perform well.
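
    For context, the basic batched decoder underlying such estimates can be sketched in a few lines (a hedged toy using Babai's rounding with an artificially well-conditioned basis; the paper's contribution is the trade-off analysis layered on top of such decoders):

```python
# Babai's rounding for a batch of BDD instances: each target t = B z + e
# decodes as z = round(B^{-1} t), and the inversion is done once per batch.
import numpy as np

rng = np.random.default_rng(3)
n, batch = 20, 50
B = 100 * np.eye(n) + rng.integers(-2, 3, (n, n))   # stand-in reduced basis (columns)

Z = rng.integers(-50, 51, (n, batch))               # hidden coefficient vectors
E = rng.uniform(-0.5, 0.5, (n, batch))              # bounded errors
T = B @ Z + E                                       # one BDD target per column

Binv = np.linalg.inv(B)                             # computed once, reused for all
Z_dec = np.rint(Binv @ T).astype(int)
print("all instances decoded:", np.array_equal(Z_dec, Z))
```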

    Revisiting the Hardness of Binary Error LWE

    Binary error LWE is the particular case of the learning with errors (LWE) problem in which errors are chosen in $\{0,1\}$. It has various cryptographic applications, and in particular has been used to construct efficient encryption schemes for use in constrained devices. Arora and Ge showed that the problem can be solved in polynomial time given a number of samples quadratic in the dimension $n$. On the other hand, the problem is known to be as hard as standard LWE given only slightly more than $n$ samples. In this paper, we first examine more generally how the hardness of the problem varies with the number of available samples. Under standard heuristics on the Arora--Ge polynomial system, we show that, for any $\epsilon > 0$, binary error LWE can be solved in polynomial time $n^{O(1/\epsilon)}$ given $\epsilon \cdot n^{2}$ samples. Similarly, it can be solved in subexponential time $2^{\tilde{O}(n^{1-\alpha})}$ given $n^{1+\alpha}$ samples, for $0 < \alpha < 1$. As a second contribution, we also generalize binary error LWE to the case of a non-uniform error probability, and analyze the hardness of non-uniform binary error LWE with respect to the error rate and the number of available samples. We show that, for any error rate $0 < p < 1$, non-uniform binary error LWE is also as hard as worst-case lattice problems provided that the number of samples is suitably restricted. This is a generalization of Micciancio and Peikert's hardness proof for uniform binary error LWE. Furthermore, we also discuss attacks on the problem when the number of available samples is linear but significantly larger than $n$, and show that for sufficiently low error rates, subexponential or even polynomial time attacks are possible.
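
    The Arora-Ge linearization at the heart of these sample-complexity results is easy to demonstrate in small dimension (toy parameters n, q and m below are chosen only for the demo): since each error lies in {0,1}, every sample satisfies a known quadratic equation in the secret, and treating each monomial as a fresh unknown turns roughly n^2/2 samples into a solvable linear system.

```python
# Arora-Ge linearization for binary error LWE: e in {0,1} implies
# (b - <a,s>)(b - <a,s> - 1) = 0 mod q, a quadratic equation in s.
import random

def solve_mod(M, r, q):
    """Gaussian elimination over F_q (q prime); requires full column rank."""
    A = [row[:] + [rhs % q] for row, rhs in zip(M, r)]
    rows, cols = len(A), len(A[0]) - 1
    for c in range(cols):
        pr = next((i for i in range(c, rows) if A[i][c] % q), None)
        if pr is None:
            raise ValueError("rank deficient: add more samples")
        A[c], A[pr] = A[pr], A[c]
        inv = pow(A[c][c], -1, q)
        A[c] = [v * inv % q for v in A[c]]
        for i in range(rows):
            if i != c and A[i][c] % q:
                f = A[i][c]
                A[i] = [(vi - f * vc) % q for vi, vc in zip(A[i], A[c])]
    return [A[i][-1] for i in range(cols)]

random.seed(4)
n, q, m = 6, 97, 80                      # m comfortably above n(n+1)/2 + n = 27
s = [random.randrange(q) for _ in range(n)]
idx = [(i, j) for i in range(n) for j in range(i, n)]   # quadratic monomials

M, r = [], []
for _ in range(m):
    a = [random.randrange(q) for _ in range(n)]
    b = (sum(ai * si for ai, si in zip(a, s)) + random.randrange(2)) % q
    quad = [a[i] * a[j] * (1 if i == j else 2) % q for (i, j) in idx]
    lin = [a[i] * (1 - 2 * b) % q for i in range(n)]
    M.append(quad + lin)                 # unknowns: s_i s_j (i <= j), then s_i
    r.append(b - b * b)                  # constant term moved to the right side

x = solve_mod(M, r, q)
print("recovered:", x[len(idx):], "  true secret:", s)
```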

    Practical Cryptanalysis of a Public-key Encryption Scheme Based on Non-linear Indeterminate Equations at SAC 2017

    We investigate the security of a public-key encryption scheme, the Indeterminate Equation Cryptosystem (IEC), introduced by Akiyama, Goto, Okumura, Takagi, Nuida, and Hanaoka at SAC 2017 as post-quantum cryptography. They gave two parameter sets: PS1 (n, p, deg X, q) = (80, 3, 1, 921601) and PS2 (n, p, deg X, q) = (80, 3, 2, 58982400019). This paper gives practical key-recovery and message-recovery attacks against those parameter sets of IEC through lattice basis-reduction algorithms. We exploit the fact that n = 80 is composite and adapt the idea of Gentry's attack against NTRU-Composite (EUROCRYPT 2001) to this setting. The summary of our attacks follows:
    * On PS1, we recover 84 private keys from 100 public keys in 30–40 seconds per key.
    * On PS1, we recover partial information of all messages from 100 ciphertexts in a second per ciphertext.
    * On PS2, we recover partial information of all messages from 100 ciphertexts in 30 seconds per ciphertext.
    Moreover, we also give message-recovery and distinguishing attacks against parameter sets with prime n, say, n = 83. We exploit another subring to reduce the dimension of lattices in our lattice-based attacks, and our attack succeeds in the case of deg X = 2.
    * For PS2' (n, p, deg X, q) = (83, 3, 2, 68339982247), we recover 7 messages from 10 random ciphertexts within 61,000 seconds (≈ 17 hours) per ciphertext.
    * Even for larger n, we can find a short vector in the lattices to break the underlying assumption of IEC. In our experiment, we found such a vector within 330,000 seconds (≈ 4 days) for n = 113.